8 research outputs found

    A Survey of Knowledge Graph Reasoning on Graph Types: Static, Dynamic, and Multimodal

    Full text link
    Knowledge graph reasoning (KGR), which aims to deduce new facts from existing facts based on logic rules mined from knowledge graphs (KGs), has become a fast-growing research direction. It has been proven to significantly benefit the use of KGs in many AI applications, such as question answering and recommendation systems. According to graph type, existing KGR models can be roughly divided into three categories, i.e., static models, temporal models, and multi-modal models. Early works in this domain mainly focus on static KGR, while recent works try to leverage temporal and multi-modal information, which is more practical and closer to real-world scenarios. However, no survey paper or open-source repository comprehensively summarizes and discusses the models in this important direction. To fill this gap, we conduct the first survey of knowledge graph reasoning, tracing it from static to temporal and then to multi-modal KGs. Concretely, the models are reviewed based on a bi-level taxonomy, i.e., a top level (graph types) and a base level (techniques and scenarios). Besides, performance results as well as datasets are summarized and presented. Moreover, we point out challenges and potential opportunities to enlighten the readers. The corresponding open-source repository is shared on GitHub: https://github.com/LIANGKE23/Awesome-Knowledge-Graph-Reasoning.
    Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
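    To make the core idea of rule-based KGR concrete, the following minimal sketch deduces new facts from known triples with one hand-written logic rule. The entities, relations, and the rule itself are illustrative assumptions, not taken from the survey.

```python
# Minimal sketch of rule-based knowledge graph reasoning: deduce new facts
# from existing (head, relation, tail) triples using one logic rule.
from itertools import product

# A tiny static KG as a set of triples (illustrative toy data).
kg = {
    ("Alice", "born_in", "Paris"),
    ("Paris", "located_in", "France"),
    ("Bob", "born_in", "Lyon"),
    ("Lyon", "located_in", "France"),
}

def apply_rule(triples):
    """Rule: born_in(x, y) AND located_in(y, z) => nationality(x, z)."""
    inferred = set()
    for (h1, r1, t1), (h2, r2, t2) in product(triples, repeat=2):
        if r1 == "born_in" and r2 == "located_in" and t1 == h2:
            inferred.add((h1, "nationality", t2))
    return inferred - triples  # keep only newly deduced facts

print(apply_rule(kg))
# {('Alice', 'nationality', 'France'), ('Bob', 'nationality', 'France')}
```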

    TMac: Temporal Multi-Modal Graph Learning for Acoustic Event Classification

    Full text link
    Audiovisual data is everywhere in this digital age, which raises higher requirements for the deep learning models developed on it. Handling the information in multi-modal data well is the key to a better audiovisual model. We observe that such audiovisual data naturally has temporal attributes, such as the time information of each frame in a video. More concretely, the data is inherently multi-modal, comprising both audio and visual cues that proceed in strict chronological order. This indicates that temporal information is important in multi-modal acoustic event modeling, both within and across modalities. However, existing methods deal with each modality's features independently and simply fuse them together, which neglects the mining of temporal relations and thus leads to sub-optimal performance. With this motivation, we propose a Temporal Multi-modal graph learning method for Acoustic event Classification, called TMac, which models such temporal information via graph learning techniques. In particular, we construct a temporal graph for each acoustic event, dividing its audio data and video data into multiple segments. Each segment can be considered a node, and the temporal relationships between nodes can be encoded as timestamps on their edges. In this way, we can smoothly capture the dynamic information within and across modalities. Several experiments demonstrate that TMac outperforms other SOTA models. Our code is available at https://github.com/MGitHubL/TMac.
    Comment: This work has been accepted by ACM MM 2023 for publication.
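    The sketch below illustrates (not the authors' code) how one acoustic event could be turned into such a temporal graph: audio and video are cut into aligned segments, each segment becomes a node, and intra- and inter-modal edges carry timestamps. Segment counts, feature dimensions, and the edge layout are assumptions for the example.

```python
# Illustrative temporal multi-modal graph for one acoustic event.
import numpy as np
import networkx as nx

def build_temporal_graph(audio_feats, video_feats, seg_duration=1.0):
    """audio_feats, video_feats: (num_segments, dim) arrays of segment features."""
    g = nx.DiGraph()
    n = len(audio_feats)
    for i in range(n):
        g.add_node(("audio", i), x=audio_feats[i], t=i * seg_duration)
        g.add_node(("video", i), x=video_feats[i], t=i * seg_duration)
    for i in range(n - 1):
        # Intra-modal edges follow chronological order within each modality.
        g.add_edge(("audio", i), ("audio", i + 1), timestamp=i * seg_duration)
        g.add_edge(("video", i), ("video", i + 1), timestamp=i * seg_duration)
    for i in range(n):
        # Inter-modal edges connect co-occurring audio and video segments.
        g.add_edge(("audio", i), ("video", i), timestamp=i * seg_duration)
        g.add_edge(("video", i), ("audio", i), timestamp=i * seg_duration)
    return g

g = build_temporal_graph(np.random.randn(5, 128), np.random.randn(5, 512))
print(g.number_of_nodes(), g.number_of_edges())  # 10 nodes, 18 edges
```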

    Structure Guided Multi-modal Pre-trained Transformer for Knowledge Graph Reasoning

    Full text link
    Multimodal knowledge graphs (MKGs), which intuitively organize information in various modalities, can benefit multiple practical downstream tasks, such as recommendation systems and visual question answering. However, most MKGs are still far from complete, which motivates the flourishing of MKG reasoning models. Recently, with the development of general-purpose architectures, pre-trained transformer models have drawn increasing attention, especially in multimodal scenarios. However, research on multimodal pre-trained transformers (MPT) for knowledge graph reasoning (KGR) is still at an early stage. As the biggest difference between MKGs and other multimodal data, the rich structural information underlying an MKG still cannot be fully leveraged by existing MPT models. Most of them only utilize the graph structure as a retrieval map for matching images and texts connected to the same entity, which hinders their reasoning performance. To this end, we propose the graph Structure Guided Multimodal Pre-trained Transformer for knowledge graph reasoning, termed SGMPT. Specifically, a graph structure encoder is adopted for structural feature encoding. Then, a structure-guided fusion module with two different strategies, i.e., weighted summation and alignment constraint, is designed to inject the structural information into both the textual and visual features. To the best of our knowledge, SGMPT is the first MPT model for multimodal KGR that mines the structural information underlying the knowledge graph. Extensive experiments on FB15k-237-IMG and WN18-IMG demonstrate that SGMPT outperforms existing state-of-the-art models and prove the effectiveness of the designed strategies.
    Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
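    As a rough sketch of the two fusion strategies named in the abstract (weighted summation and an alignment constraint), the snippet below blends structural features into textual and visual features and adds a cosine alignment term. The dimensions, the mixing weight, and the exact loss form are assumptions for illustration, not SGMPT's actual implementation.

```python
# Hypothetical structure-guided fusion: weighted summation + alignment loss.
import torch
import torch.nn.functional as F

def weighted_sum_fusion(modal_feat, struct_feat, alpha=0.7):
    """Blend a modality feature with a graph-structure feature of equal dim."""
    return alpha * modal_feat + (1.0 - alpha) * struct_feat

def alignment_loss(modal_feat, struct_feat):
    """Pull modality features toward structural features via cosine similarity."""
    return 1.0 - F.cosine_similarity(modal_feat, struct_feat, dim=-1).mean()

text_feat = torch.randn(8, 256)    # textual entity features (batch, dim)
img_feat = torch.randn(8, 256)     # visual entity features
struct_feat = torch.randn(8, 256)  # output of a graph structure encoder

fused_text = weighted_sum_fusion(text_feat, struct_feat)
fused_img = weighted_sum_fusion(img_feat, struct_feat)
loss = alignment_loss(text_feat, struct_feat) + alignment_loss(img_feat, struct_feat)
print(fused_text.shape, loss.item())
```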

    Identification of hub genes in digestive system of mandarin fish (Siniperca chuatsi) fed with artificial diet by weighted gene co-expression network analysis

    No full text
    Mandarin fish (Siniperca chuatsi) is a carnivorous freshwater fish and an economically important species. Its digestive system (liver, stomach, intestine, pyloric caecum, esophagus, and gallbladder) is an important site for studying fish domestication. In our previous study, we found that mandarin fish undergoes adaptive changes in the histological morphology and gene expression levels of the digestive system when subjected to artificial diet domestication. However, it remained unclear which hub genes are highly associated with domestication. In this study, we performed weighted gene co-expression network analysis (WGCNA) on the transcriptomes of 17 tissues and 9 developmental stages and combined it with differentially expressed gene analysis in the digestive system to identify hub genes that may play important roles in the adaptation of mandarin fish to bait conversion. A total of 31,657 genes in 26 samples were classified into 23 color modules via WGCNA. The midnightblue, darkred, lightyellow, and darkgreen modules were highly associated with the liver, stomach, esophagus, and gallbladder, respectively. The tan module was highly related to both the intestine and the pyloric caecum. The hub genes in the liver were cp, vtgc, c1in, c9, lect2, and klkb1. The hub genes in the stomach were ghrl, atp4a, gjb3, muc5ac, duox2, and chia2. The hub genes in the esophagus were mybpc1, myl2, and tpm3. The hub genes in the gallbladder were dyst, npy2r, slc13a1, and slc39a4. The hub genes in the intestine and pyloric caecum were slc15a1, cdhr5, btn3a1, anpep, slc34a2, cdhr2, and ace2. Pathway analysis showed that modules highly related to the digestive system were mainly enriched in digestion and absorption, metabolism, and immune-related pathways. After domestication, the hub genes vtgc and lect2 were significantly upregulated in the liver, chia2 was significantly downregulated in the stomach, and slc15a1, anpep, and slc34a2 were significantly upregulated in the intestine. This study identified hub genes that may play an important role in the adaptation of the digestive system to an artificial diet, providing novel evidence and ideas for further research on the domestication of mandarin fish at the molecular level.
    Funding: Key National and Special Project of Blue Granary Science and Technology Innovation (202008967002), China Scholarship Council (5101229170829), Training Plan for Applied Talents Integrating Industry and Education - College of Future Technology (2020YFD0900400).
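    As a back-of-the-envelope sketch of the generic WGCNA idea used here, the snippet below builds a soft-thresholded co-expression network from an expression matrix and ranks genes in one module by intramodular connectivity to nominate hub candidates. The random data, soft-threshold power, module membership, and gene labels are assumptions; the study itself applied the full WGCNA pipeline to 26 real transcriptome samples.

```python
# Hub-gene nomination by intramodular connectivity (generic WGCNA-style sketch).
import numpy as np

rng = np.random.default_rng(0)
expr = rng.normal(size=(26, 200))          # samples x genes expression matrix (toy data)
genes = [f"gene_{i}" for i in range(200)]  # placeholder gene identifiers

corr = np.corrcoef(expr, rowvar=False)     # gene-gene Pearson correlation (200 x 200)
adjacency = np.abs(corr) ** 6              # soft thresholding with an assumed power of 6

module = np.arange(0, 40)                  # indices of genes assigned to one module
k_in = adjacency[np.ix_(module, module)].sum(axis=1) - 1  # intramodular connectivity
hub_order = module[np.argsort(k_in)[::-1]]
print([genes[i] for i in hub_order[:5]])   # top hub gene candidates for the module
```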

    High seismic velocity structures control moderate to strong induced earthquake behaviors by shale gas development

    No full text
    Moderate to strong earthquakes have been induced worldwide by shale gas development; however, it is still unclear what factors control their behavior. Here we use local seismic networks to reliably determine the source attributes of dozens of M > 3 earthquakes and obtain a high-resolution shear-wave velocity model using ambient noise tomography. These earthquakes are found to occur close to the target shale formations in depth and along high-seismic-velocity boundaries. The magnitudes and co-seismic slip distributions of the 2018 Xingwen ML 5.7 and 2019 Gongxian ML 5.3 earthquakes are further determined jointly from seismic waveforms and InSAR data, and the co-seismic slips of these two earthquakes correlate with high-seismic-velocity zones along the fault planes. Thus, the distribution of high-velocity zones near the target shale formations, together with the stress state modulated by hydraulic fracturing, controls induced earthquake behavior and is critical for understanding the seismic potential of hydraulic fracturing.